

posted by janrinok on Saturday March 07, @07:55PM

https://arstechnica.com/space/2026/03/congress-steps-up-pressure-on-nasa-to-support-private-space-stations/

Two months ago, a key staffer for Sen. Ted Cruz said in a public meeting that she was "begging" NASA to release a document that would kick off the second round of a competition among private companies to develop replacements for the International Space Station.

There has been no movement since then, as NASA has yet to release this "request for proposals." So this week, Cruz stepped up the pressure on the space agency with a NASA Authorization bill that passed his committee on Wednesday.

Regarding NASA's support for the development of commercial space stations, the bill mandates the following actions within specified periods after passage of the law:

  • Within 60 days, publicly release the requirements for commercial space stations in low-Earth orbit
  • Within 90 days, release the final "request for proposals" to solicit industry responses
  • Within 180 days, enter into contracts with "two or more" commercial providers for such stations

Cruz is trying to inject urgency into NASA as several private companies—including Axiom Space, Blue Origin, Vast, and Voyager—are finalizing designs for space stations. All have expressed a desire for clarity from NASA on how long the space agency would like its astronauts to stay on board, the types of scientific equipment needed, and much more. These are known as "requirements" in NASA parlance.

It's a difficult time for potential vendors as they seek to balance building a business case for habitats in low-Earth orbit with the uncertainty of NASA's requirements. The agency is viewed as the most important customer for their services, but not the exclusive one.

Amid this environment, some companies have succeeded in raising new capital. Last month, Axiom Space announced it had raised $350 million in financing, which included funding from the company's founder, Kam Ghaffarian. Also among the backers was 1789 Capital, which includes Donald Trump Jr. as a partner.

On Thursday, Vast announced its own $500 million funding to accelerate the development of its Haven space stations. Like Axiom Space, Vast's funding round also included investment from the Qatar Investment Authority, which is seeking opportunities to invest in commercial space.

Nominally, NASA plans to have one or more of these companies operating a commercial space station in low-Earth orbit by 2030. This is the date at which the US space agency has stated it will retire the aging laboratory, some elements of which are now nearly three decades old. However, some space policy officials have questioned whether any of the companies might be ready by then.

Cruz and other senators on the committee appear to share those concerns, as their legislation extends the International Space Station's lifespan from 2030 to 2032 (an extension must still be approved by international partners, including Russia). Moreover, the authorization bill states, "The Administrator shall not initiate the de-orbit of the ISS until the date on which a commercial low-Earth orbit destination has reached an initial operational capability."

With this legislation, the US Senate is making clear that it views a permanent human presence in low-Earth orbit as a high priority. This version of the authorization legislation must still be passed by the full Senate and work its way through the House of Representatives.

After the legislation passed the Commerce committee, Axiom Space said on social media that it welcomes the changes: "Axiom Space is proud to support the NASA Authorization Act of 2026. The bill is a clear indicator that Chairman @SenTedCruz and the Senate Commerce Committee are determined to ensure the success of the entire human spaceflight enterprise."

In an interview, the chief executive of Vast, Max Haot, said his company also welcomed the clarifying legislation—both for its language on commercial space stations as well as its reflection of the fact that NASA Administrator Jared Isaacman has been working overtime to set the Artemis lunar program on a better path for success.

"We are really impressed by what Jared has been able to do with the American space program and aligning all of the stakeholders," he said. "As it relates to commercial space stations, we were happy to see the renewed commitment to transition from the ISS to commercial alternatives."

Haot said there should not be a hard date for de-orbiting the International Space Station but that it should depend on the readiness of the commercial providers. He said Vast is confident that, should NASA issue an RFP and make awards to private providers this year, Vast will be ready to support a continuous human presence in low-Earth orbit by the end of 2030.


Original Submission

posted by hubie on Saturday March 07, @03:07PM

We don't have a date for the upgraded service rollout, but it isn't likely until 2027:

Putting some numbers to the claims, we see that the V2 upgrade is touted to deliver ‘5G from space’ that will also be compatible with hundreds of existing LTE phones. Don’t get the 100x and 20x claims seen across Starlink social media and web pages mixed up: the V2 satellite upgrade is said to provide “100x the data density” of the current V1 satellites, with “around 20x the throughput capability” per satellite.

Starlink also expects terrestrial operator partners, like T-Mobile in the U.S., to provide services which “seamlessly transition between satellite and terrestrial networks without interruption or degradation in service.” Previous Starlink announcements point to a goal of peak speeds of 150 Mbps per user becoming realistic with the rollout of the V2 satellites.

SpaceX is currently planning up to 15,000 new satellites to power its ‘5G from space’ goal. Starship’s progress at putting the larger, more capable V2 satellites into space will impact the availability window of the enhanced service, but some V2 Mini satellites are already being launched to help bridge the gap.

Thus, early 2027 looks most likely to be when the initial V2 service will be tested in the early rollout stage.


Original Submission

posted by hubie on Saturday March 07, @10:21AM

Microscopic crystals extracted from meteorites could help settle a debate about the birth of our patch of the Milky Way:

The standard story of the origin of our solar system has gone like this: 4.6 billion years ago, a giant cloud of dust hung frozen in space. Then the explosion of a nearby star caused part of that dust cloud to collapse. Pulled by gravity toward a central point, the dust coalesced into a radiating ball of hydrogen and helium about 1.4 million kilometers in diameter — what would become our sun. The remainder, which fell into orbit, collected into our solar system's planets, along with a mess of asteroids and other cosmic leftovers.

To test the validity of this story, researchers need to peer back in time to the solar system's first moments and beyond. And the cosmochemist Nan Liu has a way to do that: Locked in a safe on her desk at Boston University's Institute for Astrophysical Research is a shard of meteorite flecked with material older than the sun.

[...] Over the past decade or so, scientists have used meteorites like Liu's to challenge the story of how the solar system formed. Instead of a supernova, the solar system and everything in it might owe its existence to a more placid-sounding cosmic scenario: Maybe our solar system cobbled itself together from the winds blown off of a gargantuan star. New studies of presolar grains could offer a way to determine whether this new story is correct.

Scientists got their first clue about what could have triggered the formation of the solar system when a fireball appeared over Mexico in 1969. The now-famous Allende meteorite spread its debris over more than 500 square kilometers.

In 1976, researchers reported that samples from Allende contained a surprise: an unexpectedly large amount of a stable isotope called magnesium-26. They proposed that the meteorite formed with an abundance of aluminum-26, which is radioactive and leaves behind magnesium-26 when it decays.
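
For reference, the decay chain is short and well characterized: aluminum-26 converts to magnesium-26 by positron emission and electron capture, with a half-life of about 717,000 years. On a 4.6-billion-year timescale that is an instant, which is why any aluminum-26 present at the solar system's birth has long since become the magnesium-26 measured today:

    ²⁶Al → ²⁶Mg + e⁺ + ν          (half-life t½ ≈ 7.17 × 10⁵ years)
    N(t) = N₀ · 2^(−t / t½)       (fraction surviving after time t)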

Yet aluminum-26 was not known to be a normal component of the interstellar medium — the dusty space between stars that would have provided the materials for Allende. Ordinary stars don't make that particular isotope. "Most of these isotopes as we observe them in the early solar system, they were just the natural product of galactic chemical evolution," said Maria Lugaro, an astrophysicist at the Konkoly Thege Miklós Astronomical Institute in Hungary. "The most important exception is aluminum-26."

So where'd it come from? In 1977, two eminent astrophysicists proposed that the anomalous aluminum likely came from a nearby supernova explosion. Other phenomena can produce aluminum-26, but the supernova shock wave could also have caused the collapse of the cloud. With a single event, astronomers could explain how two rare occurrences — the injection of aluminum-26 and the formation of a new solar system — happened at virtually the same moment. "Everybody felt that we needed something to trigger the collapse," said Vikram Dwarkadas, an astronomer at the University of Chicago.

The supernova trigger remained the favored scenario for decades, supported by detailed astrophysical models, as well as further measurements of enriched magnesium-26 in pristine meteorites. But over the past decade or so, that view has run up against other measurements that don't seem to match. The problem: The solar system has an iron deficiency.

Supernovas don't just make aluminum. Any nearby supernova would likely also have injected lots of the radioactive isotope iron-60. Therefore, if a supernova launched the formation of the solar system, "we should see quite high initial [iron-60] abundances in the early-formed objects," wrote Linru Fang, a cosmochemist at the University of Copenhagen, in an email.

[...] Researchers have come up with explanations for the missing iron. "Meteoricists are famously argumentative folks," wrote Alan Boss, an astronomer at Carnegie Science in Washington, D.C., in an email. "There always seems to be a counterexample to anything someone claims to be the case."

For instance, the aluminum could have exploded out of the supernova, while the iron — coming from deeper in the star's core — could have fallen back into the dead star. Or the explosion could have come from a quirky supernova that didn't generate iron-60 at all. It could also be that iron-60 wasn't distributed evenly in the cloud, which could mean measurements from individual meteorites aren't giving us the full picture.

Dwarkadas dismisses these explanations as "hand-waving" attempts to fine-tune the models to match the data rather than finding a more general solution. "Many people seem to accept the idea that it's not a supernova," he said.

But if the solar system didn't start with a supernova, where did it get all that aluminum?

A possibility many researchers now favor is that the aluminum-26 was delivered on the winds of a Wolf-Rayet star.

Compared to our sun, a Wolf-Rayet star is much shorter-lived, dozens of times larger, and thousands of times as luminous. A star becomes a Wolf-Rayet star when its outer hydrogen shell is stripped away, either by the gravitational attraction of another star or by the strength of its own stellar winds.

A Wolf-Rayet star's exposed core can send out stellar winds at speeds of up to 3,000 kilometers a second. "It basically sweeps up the surrounding material like a snowplow," Dwarkadas said. That swept-up material forms a shell around the star that can be 100 light-years across. The shell, which creates a bubble around the Wolf-Rayet star, is tens of thousands of times denser than the surrounding interstellar medium.

The shell contains enough material to build a solar system. It should contain a lot of aluminum-26, and — crucially — it should contain very little iron-60. "I'm looking for a star that produces only aluminum-26," Lugaro said. "The place where we can make only aluminum-26 is in the winds of these very massive stars."

Astronomers have observed suns forming within the shells of Wolf-Rayet stars, Dwarkadas said. By his estimate, as much as 16% of all sun-size stars in our galaxy could have formed this way. "If it's true, there's no reason it should be true only for our solar system," he said. "Ours will not be unique."

Dwarkadas and his colleagues have laid out perhaps the most complete model for how the winds of a Wolf-Rayet star could have blasted aluminum-26 into our solar system as it formed. Afterward, the Wolf-Rayet star, with a lifetime of only a few million years, would most likely have collapsed into a black hole, although evidence for this would be long gone, Dwarkadas said.

There are problems with the Wolf-Rayet idea, Lugaro said. For instance, a Wolf-Rayet star creates such an energetic environment that it should have torn our newly formed solar system apart.

Boss still favors the theory that our cloud of dust was ignited by a supernova. Lugaro does not. "At the moment, from the nuclear-physics point of view," she said, "I favor the winds of the Wolf-Rayet stars." However, she said, new information could change her mind next week. "This is a problem that needs to be looked at from different angles. We are still fighting a bit about this."


Original Submission

posted by hubie on Saturday March 07, @05:40AM

You've heard of C++ and Windows, C and Linux. How about HolyC and TempleOS?

Tech loves a clean narrative: genius builds the thing, the thing changes the world, everyone claps, and roll credits. The story of Terry A. Davis refuses to behave that way. Because yes, he built an entire operating system largely by himself. Yes, he wrote his own programming language to go with it. Yes, the technical achievement still makes seasoned developers raise an eyebrow and quietly mutter, "okay, that's... a lot."

But this is not a triumphant startup story. It's messier than that. More human. And, at points, genuinely uncomfortable to sit with. TempleOS didn't come out of a polished lab with venture funding and a product roadmap. It came out of one man's apartment, one man's conviction, and one man's increasingly fragile grip on reality.

See also: TempleOS Creator Passes


Original Submission

posted by hubie on Saturday March 07, @12:53AM

Jon Retting has released vscreen, a Rust service that gives AI agents a full Chromium browser with live WebRTC streaming — you see exactly what the AI sees in real time and can take over mouse and keyboard at any point. The project provides 63 MCP (Model Context Protocol) tools for browser automation: navigation, screenshots, element discovery, cookie/CAPTCHA handling, and multi-agent coordination via lease-based locking.

Built from scratch in Rust — not a Puppeteer wrapper — the codebase is ~31,000 lines across 8 crates with unsafe forbidden, 510+ tests, 3 fuzz targets, and supply chain auditing via cargo-deny. Available as pre-built Linux binaries and Docker images. Source-available, non-commercial license.
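
The lease-based locking mentioned above is a common coordination pattern: an agent may drive the shared browser only while it holds an unexpired, renewable lease. Below is a minimal single-process sketch of the pattern in C; this is a hypothetical illustration of the idea, not vscreen's actual API.

    /* Lease-based locking sketch (hypothetical; not vscreen's API).
       An agent may act on the shared resource only while it holds
       an unexpired lease, which it must periodically renew. */
    #include <stdio.h>
    #include <string.h>
    #include <time.h>

    typedef struct {
        char   holder[64];   /* id of the agent holding the lease */
        time_t expires_at;   /* expiry time; 0 means unheld */
    } Lease;

    /* Acquire (or renew) the lease for `agent` for `ttl` seconds.
       Succeeds if the lease is unheld, expired, or already ours. */
    int lease_acquire(Lease *l, const char *agent, int ttl)
    {
        time_t now = time(NULL);
        if (l->expires_at > now && strcmp(l->holder, agent) != 0)
            return 0;                      /* held by someone else */
        snprintf(l->holder, sizeof l->holder, "%s", agent);
        l->expires_at = now + ttl;
        return 1;
    }

    int main(void)
    {
        Lease browser = { "", 0 };
        printf("agent-a: %d\n", lease_acquire(&browser, "agent-a", 30)); /* 1: acquired */
        printf("agent-b: %d\n", lease_acquire(&browser, "agent-b", 30)); /* 0: refused  */
        printf("agent-a: %d\n", lease_acquire(&browser, "agent-a", 30)); /* 1: renewed  */
        return 0;
    }

In a real multi-agent deployment, the acquire step must be atomic (a compare-and-swap, or a single server arbitrating requests) so that two agents cannot take the lease at once.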

https://github.com/jameswebb68/vscreen
https://dev.to/lowjax/vscreen-deep-dive-how-63-mcp-tools-let-ai-agents-actually-use-the-internet-4gij
https://dev.to/lowjax/i-built-a-tool-that-lets-ai-agents-browse-the-real-internet-and-you-can-watch-them-do-it-2fff


Original Submission

posted by hubie on Friday March 06, @08:11PM
from the was-it-ever-a-secret? dept.

Total anonymity online is impossible, and it's dangerous to claim otherwise:

To be fair, not all VPN companies are pushing this false narrative -- CNET’s picks for the best VPNs are all very clear about what their services can and can’t do. But too many companies, including a few high-profile VPN providers, continue to keep the myth alive.

Even a VPN provider as established and well-known as CyberGhost continues to promote this dangerous falsehood. The company boldly states on its website that its service can help users “go completely anonymous and surf the internet without privacy worries,” and that they can “enjoy complete anonymity & protection online” with CyberGhost.

To be fair, CyberGhost does mention in an FAQ section tucked away at the bottom of its home page that “no VPN service can make you 100% anonymous online,” but the messaging from the company is nonetheless confusing and avoidable.

This isn't just a case of harmless exaggerated marketing -- it's reckless. Using a VPN while under the impression that it's a silver bullet for online anonymity can put you in a bad spot, even if you have nothing to hide. If you use a social media platform to share sensitive information online with someone, or if you're an investigative journalist in a region whose government practices oppressive digital surveillance, you'll still be at risk, even with a VPN.

You can't simply throw good judgment and all other basic privacy principles out the window just because you think your VPN gives you an all-encompassing invisibility cloak on the internet whenever you switch it on. It's time to dial back the hyperbole and be clear about how a VPN can and can't protect you online, starting with why all this talk about data matters.

[...] Whenever you’re logged in to a service like Google, Facebook, TikTok, Instagram, X, Amazon or Netflix, all of your activity on those platforms can be tracked by the companies and linked directly back to you. Data related to the search terms you enter, links you click on, videos you watch, items you purchase, ads you interact with and content you share are all collected and used to create a detailed profile on your interests and online habits.

Additionally, personal information such as your name, username, address, payment data and email address, along with unique identifiers like your IP address, browser type, device type and operating system can all be tracked.

[...] Yet none of this stops some VPN providers from saying that VPNs can make you totally anonymous online.

In reality, VPNs are just a small piece of the much greater online privacy and security puzzle. VPNs like Mullvad and Windscribe let you sign up and use their services without supplying any personal information whatsoever -- which is about as close as you can get to anonymity with a VPN. Other providers like Proton, NordVPN, ExpressVPN and Surfshark offer additional privacy and security services on top of a VPN that you can bundle under a single subscription, which can help you better round out your cybersecurity toolkit.

Everyday citizens simply looking to boost their online protections should be fine with a VPN, password manager and antivirus. But if you're an activist, lawyer, whistleblower, investigative journalist or anyone else with critical privacy needs, there's a lot more you should do to protect yourself and become as anonymous as possible online.

[...] While neither a VPN nor any single privacy or security tool can guarantee you anonymity, a well-rounded cybersecurity toolkit, some strategic actions and a little bit of common sense can go a long way toward protecting your privacy.


Original Submission

posted by hubie on Friday March 06, @03:30PM

By testing agent-to-agent interactions, researchers observed catastrophic system failures. Here's why that's bad news for everyone:

An increasing body of work points to the risks of agentic AI, such as last week's report by MIT and collaborators that documented a lack of oversight, measurement, and control for agents.

However, what happens when one AI agent meets another? Evidence suggests things can turn even worse, according to a report published this week by scholars at Stanford University, Northwestern, Harvard, Carnegie Mellon, and several other institutions.

The result of agent-to-agent interaction was the destruction of server computers, denial-of-service attacks, vast over-consumption of computing resources, and the "systematic escalation of minor errors into catastrophic system failures."

"When agents interact with each other, individual failures compound and qualitatively new failure modes emerge," wrote lead author Natalie Shapira of Northeastern University and collaborators in the report, 'Agents of Chaos.'

"This is a critical dimension of our findings," Shapira and team wrote, "because multi-agent deployment is increasingly common and most existing safety evaluations focus on single-agent settings."

The findings are especially timely given that multi-agent interactions have burst into the mainstream of AI with the recent fervor over the bot social platform Moltbook. That kind of multi-agent hub makes it possible for agentic AI systems to exchange data and carry out instructions on one another that weren't previously possible, largely without any humans in the loop.

The report, which can be downloaded from the arXiv pre-print server, describes a 'red team' test of interacting agents over two weeks, with attempts to find weaknesses in a system by simulating hostile behavior.

What emerged in the research is a system in which humans are mostly absent. Bots send information back and forth, and instruct each other to carry out commands.

Among the many disturbing findings are agents that spread potentially destructive instructions to other agents, agents that mutually reinforce bad security practices via an echo chamber, and agents that engage in potentially endless interactions, consuming vast system resources with no clear purpose.

[...] The premise of the researchers' work is that agentic AI can carry out actions without a person typing in a prompt, as you do with ChatGPT. Agentic AI can be given access to various resources through which to carry out actions. Those resources include email accounts and other communication channels, such as Discord, Signal, Telegram, and more. As they use email and these channels, bots can not only carry out actions but also communicate with and act on other bots.

[...] Among fundamental issues, the underlying LLMs treated both data and commands at the prompt as the same thing, leading to prompt injection.

In the interactions, the authors identified a boundary problem. Agents disclosed "artifacts," such as information obtained from email servers or Discord, without an apparent sense of who should see the information. At the heart of that approach was a lack of a "reliable private deliberation surface in deployed agent stacks." In short, an individual LLM may or may not disclose "reasoning" steps at the prompt. But agents seem to lack well-crafted guardrails and will disclose information in many ways.

The agents also had "no self-model," by which they mean, "agents in our study take irreversible, user-affecting actions without recognizing they are exceeding their own competence boundaries." An example of this issue is when two agents agree to engage in a back-and-forth dialogue without a human, pursuing that approach indefinitely, exhausting system resources.

In one observed infinite-loop scenario, agents kept interacting indefinitely, steadily exhausting system resources.

"The agents exchanged ongoing messages over the course of at least nine days," the researchers wrote, "consuming approximately 60,000 tokens at the time of writing." Tokens are how OpenAI and others price access to their cloud APIs. Consuming more tokens inflates AI costs, which is already a big issue in an era of rising prices.

The bottom line is that someone has to take responsibility for what is contingent and what is fundamental, and find solutions for both.

Right now, there is no responsibility for an agent per se, noted the researchers: "These behaviors expose a fundamental blind spot in current alignment paradigms: while agents and surrounding humans often implicitly treat the owner as the responsible party, the agents do not reliably behave as if they are accountable to that owner."

That concern means everyone building these systems must deal with the lack of responsibility: "We argue that clarifying and operationalizing responsibility may be a central unresolved challenge for the safe deployment of autonomous, socially embedded AI systems."

arXiv link: https://arxiv.org/abs/2602.20021


Original Submission

posted by janrinok on Friday March 06, @10:43AM

"Ultimately, we want to build a fleet of electric harvesters"

The Moon, and particularly the surface of Earth's cold and dusty companion, has received a lot of attention in recent months.

This has largely been driven by a decision from SpaceX founder Elon Musk to pivot, at least in the near term, from Mars to lunar surface activities and the potential for using material there to build large satellites. But there has been a notable shift from NASA, too, which has started talking a lot more about building up elements of a base on the surface rather than an orbiting space station known as the Gateway.

In short, the world's most successful space company and the largest space agency have both increased their lunar ambitions, suggesting a greater frequency of missions to the Moon in the coming years.

For companies that have long-term business plans focused around the surface of the Moon, these are very positive developments. And two of these lunar startups, Astrolab and Interlune, announced Tuesday morning they are forming a partnership amid this favorable environment.

Astrolab is one of three firms vying to build rovers for NASA's scientific activities on the surface of the Moon, as well as to provide transportation for its astronauts. But the company has been working with commercial customers as well, and one of the most important long-term ones could be a Helium-3 mining company called Interlune.

"Ultimately, we want to build a fleet of electric harvesters that will go to the Moon and excavate, extract and separate Helium-3 from the lunar regolith," said Interlune chief executive Rob Meyerson. "The FLEX Rover is a great platform to go do that."

This is not the first time the two companies have worked together. Last August, Interlune announced that it would fly a multispectral camera on a smaller prototype rover being built by Astrolab. This camera will be used to estimate helium-3 quantities and concentration in Moon dirt, or regolith.

This FLIP rover, about the size of a go-kart, is due to launch later this year on a lunar lander built by Astrobotic. It will fly atop the Griffin lander, taking the place of NASA's VIPER rover, which has been moved to another spacecraft.

The mission will therefore be a learning exercise both for Astrolab, which will test its software and other features of a small lunar rover, and for Interlune, which will seek to ground-truth the Helium-3 concentrations previously estimated from samples returned to Earth during the Apollo program.

In addition to FLIP, Astrolab is developing a larger rover, FLEX, that is about the size of a minivan. This vehicle has a horseshoe-shaped chassis that can accommodate about 3 cubic meters of payload. This allows for a broad array of activities, from carrying multiple scientific instruments across the Moon and providing a long-distance rover for two astronauts, to moving large equipment or, in the case of Interlune, serving as a mobile harvester.

"Our thesis is to make the most versatile platform possible so we can serve a wide array of customers and achieve NASA's goal of being one customer among many," said Jaret Matthews, Astrolab founder and chief executive, in an interview. "So we have essentially a modular approach that allows us to either pick up cargo or implements or payloads. And so in this case, the excavating equipment that Interlune is developing would basically go under the belly of the rover."

The companies did not say when they are scheduled to deploy an initial harvester, but both are working toward that goal. It is likely that a FLEX rover will be one of the payloads on the first SpaceX Starship mission to the lunar surface—probably, but not certainly, the lunar demo mission without crew—planned to fly to the Moon in 2027 or 2028. And Interlune has been working with an industrial equipment manufacturer, Vermeer, to build a harvester to excavate and separate Helium-3 from the lunar surface.

Helium-3 scarcely occurs naturally on Earth; it exists in only very limited quantities from nuclear weapons tests, nuclear reactors, and radioactive decay. It has several applications, but the most near-term use is in cryogenics, Meyerson believes. The company has already announced contracts for the sale of thousands of liters for very low-temperature refrigeration. But first it must demonstrate the ability to mine and refine the material, which exists in small quantities in lunar soil, and get it back to Earth. This is a difficult challenge, of course, but having partners to provide mobility across the Moon and transportation to and from it helps a lot.

Astrolab and Interlune plan to undertake prototype testing of a mobile harvester in Houston, where there is a new commercial facility known as the Texas A&M University Space Institute. This institute is currently under construction at NASA's Johnson Space Center as the space agency seeks to broaden support for commercial space activities.


Original Submission

posted by janrinok on Friday March 06, @05:58AM

https://www.os2museum.com/wp/dos-memory-management/

The memory management in DOS is simple, but that simplicity may be deceptive. There are several rather interesting pitfalls that programming documentation often does not mention.

DOS 1.x (1981) had no explicit memory management support. It was designed to run primarily on machines with 64K RAM or less, or not much more (the original PC could not have more than 64K RAM on the system board, although RAM expansion boards did exist). A COM program could easily access (almost) 64K of memory when loaded, and many programs didn't rely on having even that much; in fact, early PCs often had only 48K or 64K RAM installed. But the times were rapidly changing.

DOS 2.0 was developed to support the IBM PC/XT (introduced in March 1983), which came with 128K RAM standard, and models with 256K appeared soon enough. Even the older PCs could be upgraded with additional RAM, and DOS needed to have some mechanism to deal with that extra memory.

The DOS memory management was probably written sometime around summer 1982, and it meshed with the newly added process management functions (EXEC/EXIT/WAIT)—allocated memory is owned by the current process, and gets freed when that process terminates. Note that some versions of the memory manager source code (ALLOC.ASM) include a comment that says 'Created: ARR 30 March 1983'. That cannot possibly be true because by the end of March 1983, PC DOS 2.0 was already released, and included the memory management support. The DOS 2.0 memory management functions were already documented in the PC DOS 2.0 manual dated January 1983.
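
To make that model concrete, here is a minimal real-mode sketch of calling the DOS 2.0+ memory functions directly, using the classic int86()/int86x() interface from <dos.h> that Borland and Microsoft DOS compilers shipped (the block size and error handling here are illustrative):

    /* Allocate and free a block of DOS memory via INT 21h.
       Requires a 16-bit real-mode DOS compiler (Turbo C, MSC). */
    #include <dos.h>
    #include <stdio.h>

    int main(void)
    {
        union REGS r;
        struct SREGS s;
        unsigned seg;

        r.h.ah = 0x48;            /* AH=48h: allocate memory         */
        r.x.bx = 0x100;           /* size in 16-byte paragraphs: 4K  */
        int86(0x21, &r, &r);
        if (r.x.cflag) {          /* carry set: allocation failed;
                                     BX holds largest available block */
            printf("failed; largest block: %u paragraphs\n", r.x.bx);
            return 1;
        }
        seg = r.x.ax;             /* segment address of the block    */
        printf("allocated segment %04X\n", seg);

        /* AH=49h: free the block (ES = its segment). DOS would also
           free it automatically when this process terminates, since
           the block is owned by the current process. */
        segread(&s);
        s.es = seg;
        r.h.ah = 0x49;
        int86x(0x21, &r, &r, &s);
        return 0;
    }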


Original Submission

posted by janrinok on Friday March 06, @01:10AM
from the too-bad-YOU-aren't-able-to-get-it dept.

A 33% leap in capacity in six months is an impressive feat:

According to Micron, the new sticks are the first to employ its 32 Gb (4 GB) LPDDR5X monolithic dies, where "monolithic" means all the memory and relevant circuitry are part of a single die.

In an AI future where context is literally everything, every gigabyte of memory closer to the xPUs in a system matters, and Micron's advancement today will doubtless be found in massive AI server installations worldwide as companies allocate hundreds of billions of dollars of capex in the race toward AI supremacy.

The SOCAMM2 form factor is the result of a partnership between Nvidia and memory makers Micron, Samsung, and SK hynix. The SOCAMM standard was originally designed by Nvidia, but the accelerator mogul reportedly had trouble getting the modules to operate without overheating on high-density servers. CEO Jensen Huang wisely teamed up with the folks who make computer memory for a living, resulting in SOCAMM2s with growing density and lower power consumption.


Original Submission

posted by janrinok on Thursday March 05, @08:25PM

https://codon.org.uk/~mjg59/blog/p/to-update-blobs-or-not-to-update-blobs/

A lot of hardware runs non-free software. Sometimes that non-free software is in ROM. Sometimes it’s in flash. Sometimes it’s not stored on the device at all, it’s pushed into it at runtime by another piece of hardware or by the operating system. We typically refer to this software as “firmware” to differentiate it from the software run on the CPU after the OS has started but a lot of it (and, these days, probably most of it) is software written in C or some other systems programming language and targeting Arm or RISC-V or maybe MIPS and even sometimes x86. There’s no real distinction between it and any other bit of software you run, except it’s generally not run within the context of the OS. Anyway. It’s code. I’m going to simplify things here and stop using the words “software” or “firmware” and just say “code” instead, because that way we don’t need to worry about semantics.

A fundamental problem for free software enthusiasts is that almost all of the code we’re talking about here is non-free. In some cases, it’s cryptographically signed in a way that makes it difficult or impossible to replace it with free code. In some cases it’s even encrypted, such that even examining the code is impossible. But because it’s code, sometimes the vendor responsible for it will provide updates, and now you get to choose whether or not to apply those updates.

I’m now going to present some things to consider. These are not in any particular order and are not intended to form any sort of argument in themselves, but are representative of the opinions you will get from various people and I would like you to read these, think about them, and come to your own set of opinions before I tell you what my opinion is.

Does this blob do what it claims to do? Does it suddenly introduce functionality you don’t want? Does it introduce security flaws? Does it introduce deliberate backdoors? Does it make your life better or worse?

You’re almost certainly being provided with a blob of compiled code, with no source code available. You can’t just diff the source files, satisfy yourself that they’re fine, and then install them. To be fair, even though you (as someone reading this) are probably more capable of doing that than the average human, you’re likely not doing that even if you are capable because you’re also likely installing kernel upgrades that contain vast quantities of code beyond your ability to understand. We don’t rely on our personal ability, we rely on the ability of those around us to do that validation, and we rely on an existing (possibly transitive) trust relationship with those involved. You don’t know the people who created this blob, you likely don’t know people who do know the people who created this blob, these people probably don’t have an online presence that gives you more insight. Why should you trust them?

If it’s in ROM and it turns out to be hostile then nobody can fix it ever

The people creating these blobs largely work for the same company that built the hardware in the first place. When they built that hardware they could have backdoored it in any number of ways. And if the hardware has a built-in copy of the code it runs, why do you trust that that copy isn’t backdoored? Maybe it isn’t and updates would introduce a backdoor, but in that case if you buy new hardware that runs new code aren’t you putting yourself at the same risk?

Designing hardware where you’re able to provide updated code and nobody else can is just a dick move. We shouldn’t encourage vendors who do that.

Even if blobs are signed and can’t easily be replaced, the ones that aren’t encrypted can still be examined. The SSD vulnerabilities above were identifiable because researchers were able to reverse engineer the updates. It can be more annoying to audit binary code than source code, but it’s still possible.

Replacing one non-free blob with another non-free blob increases the total number of non-free blobs involved in the whole system, but doesn’t increase the number that are actually executing at any point in time.

Ok we’re done with the things to consider. Please spend a few seconds thinking about what the tradeoffs are here and what your feelings are. Proceed when ready.

I trust my CPU vendor. I don’t trust my CPU vendor because I want to, I trust my CPU vendor because I have no choice. I don’t think it’s likely that my CPU vendor has designed a CPU that identifies when I’m generating cryptographic keys and biases the RNG output so my keys are significantly weaker than they look, but it’s not literally impossible. I generate keys on it anyway, because what choice do I have? At some point I will buy a new laptop because Electron will no longer fit in 32GB of RAM and I will have to make the same affirmation of trust, because the alternative is that I just don’t have a computer. And in any case, I will be communicating with other people who generated their keys on CPUs I have no control over, and I will also be relying on them to be trustworthy. If I refuse to trust my CPU then I don’t get to computer, and if I don’t get to computer then I will be sad. I suspect I’m not alone here.

So why hesitate to apply a code update from that same vendor? The straightforward answer is that it could, in theory, include new code that doesn’t act in my interests, either deliberately or not. Of course, if you don’t trust your CPU vendor, why are you buying CPUs from them? But maybe they’ve since been corrupted (in which case don’t buy any new CPUs from them either), or maybe they’ve just introduced a new vulnerability by accident. You’re also in a position to determine whether the alleged security improvements matter to you at all. Do you care about speculative execution attacks if all software running on your system is trustworthy? Probably not! Do you need to update a blob that fixes something you don’t care about and which might introduce some sort of vulnerability? Seems like no!

But there’s a difference between a recommendation for a fully informed device owner who has a full understanding of threats, and a recommendation for an average user who just wants their computer to work and to not be ransomwared. A code update on a wifi card may introduce a backdoor, or it may fix the ability for someone to compromise your machine with a hostile access point. Most people are just not going to be in a position to figure out which is more likely, and there’s no single answer that’s correct for everyone. What we do know is that where vulnerabilities in this sort of code have been discovered, updates have tended to fix them - but nobody has flagged such an update as a real-world vector for system compromise.

My personal opinion? You should make your own mind up, but also you shouldn’t impose that choice on others, because your threat model is not necessarily their threat model. Code updates are a reasonable default, but they shouldn’t be unilaterally imposed, and nor should they be blocked outright. And the best way to shift the balance of power away from vendors who insist on distributing non-free blobs is to demonstrate the benefits gained from them being free - a vendor who ships free code on their system enables their customers to improve their code and enable new functionality and make their hardware more attractive.

It’s impossible to say with absolute certainty that your security will be improved by installing code blobs. It’s also impossible to say with absolute certainty that it won’t. So far evidence tends to support the idea that most updates that claim to fix security issues do, and there’s not a lot of evidence to support the idea that updates add new backdoors. Overall I’d say that providing the updates is likely the right default for most users - and that that should never be strongly enforced, because people should be allowed to define their own security model, and whatever set of threats I’m worried about, someone else may have a good reason to focus on different ones.

Footnotes:

  • Code that runs on the CPU before the OS is still usually described as firmware - UEFI is firmware even though it’s executing on the CPU, which should give a strong indication that the difference between “firmware” and “software” is largely arbitrary.

  • Because UEFI makes everything more complicated, UEFI makes this more complicated. Triggering a UEFI runtime service involves your OS jumping into firmware code at runtime, in the same context as the OS kernel. Sometimes this will trigger a jump into System Management Mode, but other times it won’t, and it’s just your kernel executing code that got dumped into RAM when your system booted.

  • I don’t understand most of the diff between one kernel version and the next, and I don’t have time to read all of it either.

  • There’s a bunch of reasons to do this, the most reasonable of which is probably not wanting customers to replace the code and break their hardware and deal with the support overhead of that, but not being able to replace code running on hardware I own is always going to be an affront to me.


Original Submission

posted by janrinok on Thursday March 05, @03:42PM
from the no-human-hand-no-rights dept.

The Supreme Court has declined to hear a dispute over copyright in AI-generated art, meaning AI-generated art is not copyrightable. If that is the case, what about other things generated by AI, such as code?

Computer scientist Stephen Thaler has once again come up short before the US Supreme Court. On Monday, the court declined to address the question of whether art created by artificial intelligence (AI) can be protected by copyright under US law, turning away Thaler's appeal. The case has wound through various courts over several years.

From Reuters:

The U.S. Supreme Court declined on Monday to take up the issue of whether art generated by artificial intelligence can be copyrighted under U.S. law, turning away a case involving a computer scientist from Missouri who was denied a copyright for a piece of visual art made by his AI system.

Plaintiff Stephen Thaler had appealed to the justices after lower courts upheld a U.S. Copyright Office decision that the AI-crafted visual art at issue in the case was ineligible for copyright protection because it did not have a human creator.

Thaler, of St. Charles, Missouri, applied for a federal copyright registration in 2018 covering "A Recent Entrance to Paradise," visual art he said his AI technology "DABUS" created. The image shows train tracks entering a portal, surrounded by what appears to be green and purple plant imagery.

The Copyright Office rejected his application in 2022, finding that creative works must have human authors to be eligible to receive a copyright.

U.S. President Donald Trump's administration had urged the Supreme Court not to hear Thaler's appeal.

The Copyright Office has separately rejected bids by artists for copyrights on images generated by the AI system Midjourney. Those artists argued that they were entitled to copyrights for images they created with AI assistance - unlike Thaler, who said his system created "A Recent Entrance to Paradise" independently.

A federal judge in Washington upheld the office's decision in Thaler's case in 2023, writing that human authorship is a "bedrock requirement of copyright." The U.S. Court of Appeals for the District of Columbia Circuit affirmed the ruling in 2025.

Thaler's lawyers told the Supreme Court in a filing that his case was of "paramount importance" considering the rapid rise of generative AI.

With a refusal by the court to hear the appeal, Thaler's lawyers said, "even if it later overturns the Copyright Office's test in another case, it will be too late. The Copyright Office will have irreversibly and negatively impacted AI development and use in the creative industry during critically important years."

https://www.theverge.com/policy/887678/supreme-court-ai-art-copyright
https://www.reuters.com/legal/government/us-supreme-court-declines-hear-dispute-over-copyrights-ai-generated-material-2026-03-02/
https://www.heise.de/en/news/Copyright-dispute-over-AI-generated-art-US-Supreme-Court-dismisses-case-11196323.html


Original Submission

posted by janrinok on Thursday March 05, @10:52AM

https://www.tomshardware.com/networking/drones-attack-several-aws-middle-east-region-data-centers-amid-iran-war-leading-to-outages-service-health-been-disrupted-after-power-cut-due-to-fire-risk

As per the quote above, the UAE data center was impacted most severely by the drones. From broader reporting of the conflict, we assume these drone strikes are part of Iran’s response to U.S. Operation Epic Fury and Israeli Operation Roaring Lion strikes on Iranian targets over the weekend. Both the UAE and Bahrain data centers were hit by drones in the early hours of March 1. Whether Iran purposely targeted AWS facilities, we cannot say for certain.

While engineers are working to safely restore the full gamut of AWS services, the firm says that it “strongly recommend[s] that customers with workloads running in the Middle East take action now to migrate those workloads to alternate AWS Regions.” It would be wise to enact disaster recovery plans, recover from remote backups stored in other Regions, and update applications to direct traffic away from the UAE, for now, too.

As with ME-CENTRAL-1, above, AWS is recommending users migrate or replicate their ME-SOUTH-1 Region data to another AWS Region.

These are some of the first ‘tech’ impacts we have seen precipitated by the 2026 Iran Conflict. They surely won’t be the last, with the costs of shipping, raw materials, and energy already inflating rapidly due to emerging geopolitical risks and pressures.


Original Submission

posted by janrinok on Thursday March 05, @06:11AM

https://arstechnica.com/space/2026/03/no-fooling-nasa-targets-april-1-for-artemis-ii-launch-to-the-moon/

NASA has fixed the problem that forced the removal of the rocket for the Artemis II mission from its launch pad last month, but it will be a couple of weeks before officials are ready to move the vehicle back into the starting blocks at Kennedy Space Center in Florida.

The 322-foot-tall (98-meter) rocket could have launched as soon as this week after it passed a key fueling test on February 21. During that test, NASA loaded the Space Launch System rocket with super-cold propellants without any major problems, apparently overcoming a persistent hydrogen leak that prevented the mission from launching in early February.

However, another problem cropped up just one day after the successful fueling demo. Ground teams were unable to flow helium into the rocket's upper stage. Unlike the connections to the core stage, which workers can repair at the launch pad, the umbilical lines leading to the upper stage higher up the rocket are only accessible inside the cavernous Vehicle Assembly Building (VAB) at Kennedy.

Mission managers quickly decided to roll the rocket back to the assembly building for troubleshooting. The rocket returned to the VAB on February 25, and within a week, engineers found the source of the helium flow issue. Inspections revealed that a seal in the quick disconnect, through which helium flows from ground systems into the rocket, was obstructing the pathway, according to NASA.

"The team removed the quick disconnect, reassembled the system, and began validating the repairs to the upper stage by running a reduced flow rate of helium through the mechanism to ensure the issue was resolved," NASA said in an update posted Tuesday. "Engineers are assessing what allowed the seal to become dislodged to prevent the issue from recurring."

NASA is not expected to return the SLS rocket and Orion spacecraft to the launch pad until later this month. Inside the VAB, technicians will complete several other tasks to "refresh" the rocket for the next series of launch opportunities.

This work will include activating a new set of flight termination system batteries for the rocket's range safety destruct system, which would be used to destroy the vehicle if it veered off course during launch. Workers will also replace flight batteries on the SLS core stage, upper stage, and solid rocket boosters, and recharge the batteries on the Orion spacecraft's launch abort system, NASA said. At the bottom of the rocket, crews will replace a seal on the core stage liquid oxygen feed line.

NASA has not said whether the launch team will conduct another countdown rehearsal after it returns to Launch Complex 39B at Kennedy.

The first of five launch opportunities in early April is on April 1, with a two-hour launch window opening at 6:24 pm EDT (22:24 UTC). There are additional launch dates available on April 3, 4, 5, and 6. Each launch period has about five potential launch dates after accounting for several constraints on the mission trajectory, which will carry the Orion spacecraft and four astronauts around the far side of the Moon and back to Earth.

Artemis II will be the first human spaceflight to the vicinity of the Moon since 1972 and is the first crew mission for NASA's Artemis program, which aims to land astronauts on the lunar surface as early as 2028.


Original Submission

posted by Fnord666 on Thursday March 05, @12:23AM
from the Nicht-ihre-Papiere-bitte dept.

Web sites are increasingly trying to glean additional personally identifiable information from visitors in the name of authentication. Some nefarious interests actually do have the goal of tracking every minute interaction and communication tied to a real-world identity. However, if the goal is authentication rather than the collection of information, none of that is necessary. Cryptographer and professor Matthew Green has a few thoughts on cryptographic engineering, specifically an illustrated primer on anonymous credentials. He frames the question this way: how do we live in a world of routine age verification and human identification without completely abandoning our privacy?

This post has been on my back burner for well over a year. This has bothered me, because every month that goes by I become more convinced that anonymous authentication is the most important topic we could be talking about as cryptographers. This is because I’m very worried that we’re headed into a bit of a privacy dystopia, driven largely by bad legislation and the proliferation of AI.

But this is too much for a beginning. Let’s start from the basics.

One of the most important problems in computer security is user authentication. Often when you visit a website, log into a server, or access a resource, you (and generally, your computer) need to convince the provider that you’re authorized to access the resource. This authorization process can take many forms. Some sites require explicit user logins, which users complete using traditional username-and-password credentials or (increasingly) advanced alternatives like MFA and passkeys. Some sites don’t require explicit user credentials, or allow you to register a pseudonymous account; however, even these sites often ask user agents to prove something. Typically this is some kind of basic “anti-bot” check, which can be done with a combination of long-lived cookies, CAPTCHAs, or whatever the heck Cloudflare does: [...]

Again, that naively assumes that elimination of privacy is not a specific goal, which adds an additional barrier to gaining acceptance for anonymous approaches.
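
To give a flavor of the machinery Green's primer covers, one classic building block (an illustrative primitive, not necessarily the construction the primer settles on) is Chaum's blind RSA signature, which lets a server sign a token without ever seeing it. With RSA public key (N, e), private key d, and a random blinding factor r chosen by the user:

    blind:    m' = m · r^e (mod N)        user hides message m from the signer
    sign:     s' = (m')^d  (mod N)        server signs the blinded value
    unblind:  s  = s' · r⁻¹ (mod N) = m^d (mod N)

The result s is an ordinary RSA signature on m (it verifies as s^e ≡ m mod N), yet the server only ever saw m'. That unlinkability between issuance and later presentation is the core property modern anonymous-credential schemes generalize.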

Previously:
(2025) Passkeys Are Incompatible With Open-Source Software
(2024) VISA and Biometric Authentication
(2022) NIST Drafts Revised Guidelines for Digital Identification in Federal Systems
[...] and more.


Original Submission